Applied Bayesian Analyses in R
Part 2
WAMBS checklist
When to Worry and How to Avoid Misuse of Bayesian Statistics
by Laurent Smeets and Rens van de Schoot
Before estimating the model:
- Do you understand the priors?
After estimation before inspecting results:
- Does the trace-plot exhibit convergence?
- Does convergence remain after doubling the number of iterations?
- Does the posterior distribution histogram have enough information?
- Do the chains exhibit a strong degree of autocorrelation?
- Do the posterior distributions make substantive sense?
Understanding the exact influence of the priors
- Do different specifications of the multivariate variance priors influence the results?
- Is there a notable effect of the prior when compared with non-informative priors?
- Are the results stable from a sensitivity analysis?
- Is the Bayesian way of interpreting and reporting model results used?
Tutorial source: https://www.rensvandeschoot.com/brms-wambs/. Alternatives exist as well, such as the BARG framework (Kruschke, J. K. Bayesian Analysis Reporting Guidelines. Nat Hum Behav 5, 1282–1291 (2021). https://doi.org/10.1038/s41562-021-01177-7)
WAMBS Template to use
File called WAMBS_workflow_MarathonData.qmd (quarto document)
Create your own project and project folder
Copy the template and rename it
We will go through the different parts in the slide show
You can apply/adapt the code in the template
To render the document properly with references, you also need the references.bib file
Side-path: projects in RStudio and the here package
If you do not know how to use Projects in RStudio or the here package, these two sources might be helpful:
Projects: https://youtu.be/MdTtTN8PUqU?si=mmQGlU063EMt86B2
here package: https://youtu.be/oh3b3k5uM7E?si=0-heLJXfFVLtTohh
Preparations for applying it to Marathon model
Packages needed:
library(here)
library(tidyverse)
library(brms)
library(bayesplot)
library(ggmcmc)
library(patchwork)
library(priorsense)
Preparations for applying it to Marathon model
Load the dataset and the model:
load(
file = here("Presentations", "MarathonData.RData")
)
MarathonTimes_Mod2 <-
readRDS(file =
here("Presentations",
"Output",
"MarathonTimes_Mod2.RDS")
)
Focus on the priors before estimation
Remember: priors come in many disguises
Uninformative/Weakly informative
When objectivity is crucial and you want to let the data speak for themselves…
Informative
When including significant information is crucial
- previously collected data
- results from former research/analyses
- data of another source
- theoretical considerations
- elicitation
brms defaults
Weakly informative priors
If the dataset is big, the impact of the priors is minimal
But, always better to know what you are doing!
Complex models might run into convergence issues \(\rightarrow\) specifying more informative priors might help!
So, how to deviate from the defaults?
Check priors used by brms
Function: get_prior( )
Remember our model 2 for Marathon Times:
\[\begin{aligned} & \text{MarathonTimeM}_i \sim N(\mu_i,\sigma_e)\\ & \mu_i = \beta_0 + \beta_1*\text{km4week}_i + \beta_2*\text{sp4week}_i \end{aligned}\]
get_prior(
MarathonTimeM ~ 1 + km4week + sp4week,
data = MarathonData
)
Check priors used by brms
- prior: type of prior distribution
- class: parameter class (with b being population-level effects)
- coef: name of the coefficient within the parameter class
- group: grouping factor for group-level parameters (when using mixed effects models)
- resp: name of the response variable when using multivariate models
- lb & ub: lower and upper bound for parameter restriction
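For our model the output looks roughly as follows (shown schematically; the Intercept and sigma rows match the default priors visualized below, and brms uses improper flat priors for class b by default):
prior                      class      coef      lb
(flat)                     b
(flat)                     b          km4week
(flat)                     b          sp4week
student_t(3, 199.2, 24.9)  Intercept
student_t(3, 0, 24.9)      sigma                0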
Visualizing priors
The best way to make sense of the priors used is visualizing them!
Many options:
- The Zoo of Distributions https://ben18785.shinyapps.io/distribution-zoo/
- making your own visualizations
See WAMBS template!
There we demonstrate the use of ggplot2, metRology, ggtext and patchwork to visualize the priors.
Visualizing priors
library(metRology)
library(ggplot2)
library(ggtext)
library(patchwork)
# Setting a plotting theme
theme_set(theme_linedraw() +
theme(text = element_text(family = "Times", size = 8),
panel.grid = element_blank(),
plot.title = element_markdown())
)
# Generate the plot for the prior of the Intercept (mu)
Prior_mu <- ggplot( ) +
stat_function(
fun = dt.scaled, # We use the dt.scaled function of metRology
args = list(df = 3, mean = 199.2, sd = 24.9), # df, mean, and sd of the scaled t-distribution
xlim = c(120,300)
) +
scale_y_continuous(name = "density") +
labs(title = "Prior for the intercept",
subtitle = "student_t(3,199.2,24.9)")
# Generate the plot for the prior of the residual standard deviation (sigma)
Prior_sigma <- ggplot( ) +
stat_function(
fun = dt.scaled, # We use the dt.scaled function of metRology
args = list(df = 3, mean = 0, sd = 24.9), # df, mean, and sd of the scaled t-distribution
xlim = c(0,6)
) +
scale_y_continuous(name = "density") +
labs(title = "Prior for the residual variance",
subtitle = "student_t(3,0,24.9)")
# Generate the plot for the prior of the effects of independent variables
Prior_betas <- ggplot( ) +
stat_function(
fun = dnorm, # We use the normal distribution
args = list(mean = 0, sd = 10), # mean and sd of the normal distribution
xlim = c(-20,20)
) +
scale_y_continuous(name = "density") +
labs(title = "Prior for the effects of independent variables",
subtitle = "N(0,10)")
Prior_mu + Prior_sigma + Prior_betas +
plot_layout(ncol = 3)
Understanding priors… another example
Experimental study (pretest - posttest design) with 3 conditions:
- control group;
- experimental group 1;
- experimental group 2.
Model:
\[\begin{aligned} & \text{Posttest}_i \sim N(\mu_i,\sigma_e)\\ & \mu_i = \beta_0 + \beta_1*\text{Pretest}_{i} + \beta_2*\text{Exp_cond1}_{i} + \beta_3*\text{Exp_cond2}_{i} \end{aligned}\]
Our job: coming up with priors that reflect that we expect both conditions to have a positive effect (a hypothesis based on the literature) and that we have no indication that one experimental condition will outperform the other (a possible translation into brms priors is sketched after these slides).
Understanding priors… another example
- Assuming pre- and posttest scores are standardized
- Assuming no increase between pre- and posttest in control condition
Understanding priors… another example
- Assuming a strong correlation between pre- and posttest
Understanding priors… another example
- Assuming a small effect of experimental conditions
- No difference between both experimental conditions
Remember Cohen’s d: 0.2 = small effect size; 0.5 = medium effect size; 0.8 or higher = large effect size
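One way to translate these considerations into brms priors (a sketch; the exact values are illustrative assumptions, not taken from the slides):
Priors_experiment <-
  c(
    set_prior("normal(0.7, 0.2)",  # strong positive pre-post relation (standardized scores)
              class = "b", coef = "Pretest"),
    set_prior("normal(0.2, 0.1)",  # small positive effect (around Cohen's d = 0.2)
              class = "b", coef = "Exp_cond1"),
    set_prior("normal(0.2, 0.1)",  # identical prior: no expected difference between conditions
              class = "b", coef = "Exp_cond2")
  )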
Setting custom priors in brms
Setting our own custom priors can be done with the set_prior( ) command
E.g., change the priors for the betas (the effects of km4week and sp4week):
Custom_priors <-
c(
set_prior(
"normal(0,10)",
class = "b",
coef = "km4week"),
set_prior(
"normal(0,10)",
class = "b",
coef = "sp4week")
)
Prior Predictive Check
Did you set sensible priors?
- Simulate data based on the model and the priors
- Visualize the simulated data and compare with real data
- Check if the plot shows impossible simulated datasets
Prior Predictive Check in brms
Step 1: fit the model with the custom priors and the option sample_prior = "only"
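A minimal sketch of this step (assuming the marathon model and the Custom_priors object defined earlier; the name Fit_Model_priors is the one used in Step 2):
Fit_Model_priors <- brm(
  MarathonTimeM ~ 1 + km4week + sp4week,
  data = MarathonData,
  prior = Custom_priors,
  sample_prior = "only", # ignore the data and sample from the priors only
  seed = 1975
)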
Prior Predictive Check in brms
Step 2: visualize the data with the pp_check( ) function
set.seed(1975)
pp_check(
Fit_Model_priors,
ndraws = 300) # number of simulated datasets you wish for
Check some summary statistics
How are summary statistics of simulated datasets (e.g., median, min, max, …) distributed over the datasets?
How does that compare to our real data?
Use the type = "stat" argument within pp_check( )
pp_check(Fit_Model_priors,
type = "stat",
stat = "median")Your Turn
Your data and model
Perform a prior predictive check
If necessary re-think your priors and check again
Focus on convergence of the model (before interpreting the model!)
Does the trace-plot exhibit convergence?
Create custom trace-plots (aka caterpillar plots) with the ggs( ) function from the ggmcmc package
Model_chains <- ggs(MarathonTimes_Mod2)
Model_chains %>%
filter(Parameter %in% c(
"b_Intercept",
"b_km4week",
"b_sp4week",
"sigma"
)
) %>%
ggplot(aes(
x = Iteration,
y = value,
col = as.factor(Chain)))+
geom_line() +
facet_grid(Parameter ~ .,
scales = 'free_y',
switch = 'y') +
labs(title = "Caterpillar Plots for the parameters",
col = "Chains")Does convergence remain after doubling the number of iterations?
Re-fit the model with more iterations
Check trace-plots again
First consider the need to do this! If you have a complex model that already took a long time to run, this check will take at least twice as long…
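A sketch of such a re-fit using brms's update( ) method (assuming the original model used the brms default of 2000 iterations per chain):
MarathonTimes_Mod2_double <- update(
  MarathonTimes_Mod2,
  iter = 4000 # double the default 2000 iterations per chain
)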
Your Turn
- Your data and model
- Do the first checks on the model convergence
R-hat statistics
Sampling of parameters is done using:
- multiple chains
- multiple iterations within chains
If variance between chains is big \(\rightarrow\) NO CONVERGENCE
R-hat (\(\widehat{R}\)) : compares the between- and within-chain estimates for model parameters
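In its classic form (Gelman et al., 2013; brms reports the rank-normalized variant of Vehtari et al., 2021), with chains of \(n\) draws each, \(W\) the mean within-chain variance, and \(B\) the between-chain variance (\(n\) times the variance of the chain means):
\[\widehat{R} = \sqrt{\frac{\frac{n-1}{n}W + \frac{1}{n}B}{W}}\]
Values near 1 indicate that the chains agree with each other.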
R-hat statistics
\(\widehat{R}\) < 1.015 for each parameter estimate
at least 4 chains are recommended
Effective Sample Size (ESS) > 400 to rely on \(\widehat{R}\)
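Besides the plot on the next slide, the numbers can also be inspected directly; a minimal sketch using the posterior package (installed as a dependency of brms):
library(posterior)
summarise_draws(
  as_draws_df(MarathonTimes_Mod2),
  rhat, ess_bulk, ess_tail # convergence and effective-sample-size diagnostics
)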
R-hat in brms
mcmc_rhat() function from the bayesplot package
mcmc_rhat(
brms::rhat(MarathonTimes_Mod2),
size = 3
)+
yaxis_text(hjust = 1) # to print parameter names
Your Turn
Your data and model
Check the R-hat statistics
Autocorrelation
Sampled parameter values are not independent!
So there is autocorrelation
But you don’t want too much impact of autocorrelation
2 approaches to check this:
- ratio of the effective sample size to the total sample size
- plot degree of autocorrelation
Ratio effective sample size / total sample size
Should be higher than 0.1 (Gelman et al., 2013)
Visualize making use of the mcmc_neff( ) function from bayesplot
mcmc_neff(
neff_ratio(MarathonTimes_Mod2)
) +
yaxis_text(hjust = 1) # to print parameter names
Plot degree of autocorrelation
- Visualize making use of the mcmc_acf( ) function
mcmc_acf(
as.array(MarathonTimes_Mod2),
regex = "b") # to plot only the parameters starting with b (our beta's)Your Turn
Your data and model
Check the autocorrelation
Rank order plots
An additional way to assess the convergence of the MCMC algorithm
If the algorithm converged, the plots of all chains should look similar
mcmc_rank_hist(
MarathonTimes_Mod2,
regex = "b" # only intercept and beta's
) Your Turn
Your data and model
Check the rank order plots
Focus on the Posterior
Does the posterior distribution histogram have enough information?
Plot a histogram of the posterior for each parameter
Each histogram should have a clear peak and smoothly sliding slopes
Plotting the posterior distribution histogram
Step 1: create a new object with ‘draws’ based on the final model
posterior_PD <- as_draws_df(MarathonTimes_Mod2)
Plotting the posterior distribution histogram
Step 2: create histogram making use of that object
post_intercept <-
posterior_PD %>%
select(b_Intercept) %>%
ggplot(aes(x = b_Intercept)) +
geom_histogram() +
ggtitle("Intercept")
post_km4week <-
posterior_PD %>%
select(b_km4week) %>%
ggplot(aes(x = b_km4week)) +
geom_histogram() +
ggtitle("Beta km4week")
post_sp4week <-
posterior_PD %>%
select(b_sp4week) %>%
ggplot(aes(x = b_sp4week)) +
geom_histogram() +
ggtitle("Beta sp4week") Plotting the posterior distribution histogram
Step 3: print the plot making use of patchwork's workflow to combine plots
post_intercept + post_km4week + post_sp4week +
plot_layout(ncol = 3)
Posterior Predictive Check
Generate data based on the posterior probability distribution
Create plot of distribution of y-values in these simulated datasets
Overlay with distribution of observed data
using pp_check() again, now with our model
pp_check(MarathonTimes_Mod2,
ndraws = 100)
Posterior Predictive Check
- We can also focus on some summary statistics (as we did with the prior predictive checks)
pp_check(MarathonTimes_Mod2,
ndraws = 300,
type = "stat",
stat = "median")Your Turn
Your data and model
Focus on the posterior and do some checks!
Prior sensitivity analyses
Why prior sensitivity analyses?
Often we rely on 'arbitrarily' chosen (default) weakly informative priors
What is the influence of the prior (and the likelihood) on our results?
You could set new priors ad hoc, re-run the analyses, and compare (a lot of work, without strict systematic guidelines)
Semi-automated checks can be done with the priorsense package
Using the priorsense package
Recently a package dedicated to prior sensitivity analyses was launched
# install.packages("remotes")
remotes::install_github("n-kall/priorsense")
Key idea: power-scaling (both prior and likelihood)
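In brief (a summary of the power-scaling idea, not wording from the slides): the posterior is perturbed by raising the prior or the likelihood to a power \(\alpha\) close to 1,
\[p_{\alpha}(\theta \mid y) \propto p(\theta)^{\alpha_{\text{prior}}}\, p(y \mid \theta)^{\alpha_{\text{lik}}}\]
and the distance between the original and the power-scaled posterior indicates how sensitive the results are to each part of the model.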
Basic table with indices
A first check is done using the powerscale_sensitivity( ) function
the prior column contains info on sensitivity to the prior (should be lower than 0.05)
the likelihood column contains info on sensitivity to the likelihood (which we want to be high: 'let our data speak')
the diagnosis column is a verbalization of the potential problem (- if none)
powerscale_sensitivity(MarathonTimes_Mod2)
Sensitivity based on cjs_dist:
# A tibble: 4 × 4
variable prior likelihood diagnosis
<chr> <dbl> <dbl> <chr>
1 b_Intercept 0.000853 0.0851 -
2 b_km4week 0.000512 0.0802 -
3 b_sp4week 0.000370 0.0831 -
4 sigma 0.00571 0.151 -
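Here all values in the prior column are well below 0.05, so no prior sensitivity is flagged (diagnosis -): the results are driven by the data rather than by the priors.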
Visualization of prior sensitivity
powerscale_plot_dens(
powerscale_sequence(
MarathonTimes_Mod2
),
variable = c(
"b_Intercept",
"b_km4week",
"b_sp4week"
)
)
Visualization of prior sensitivity
powerscale_plot_quantities(
powerscale_sequence(
MarathonTimes_Mod2
),
variable = c(
"b_km4week"
)
)
Your Turn
Your data and model
Check the prior sensitivity of your results